
FPDANet: A Multi-Section Classification Model for Intelligent Screening of Fetal Ultrasound

Chen, Minglang, He, Jie, Xu, Caixu, Liang, Bocheng, Li, Shengli, He, Guannan, Tao, Xiongjie

arXiv.org Artificial Intelligence

ResNet has been widely used in image classification because its residual connections make identity mappings easy to learn. However, ResNet propagates features in a single direction and lacks an effective mechanism for correlating contextual information, which limits its performance on fetal ultrasound classification, where images suffer from low contrast, high inter-class similarity, and heavy noise. To address these challenges, we propose FPDANet, a bilateral multi-scale information fusion network. Specifically, we design a positional attention (DAN) module, which uses feature similarity to establish dependencies between features at different spatial positions and thereby enhance the feature representation. In addition, we design a bilateral multi-scale information fusion (FPAN) module to capture contextual and global feature dependencies at different feature scales, further improving the model's representational power. FPDANet achieves 91.05% Top-1 and 100% Top-5 accuracy, and the experimental results demonstrate its effectiveness and robustness.
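The positional attention idea described above can be illustrated with a minimal NumPy sketch: every spatial location of a feature map is re-weighted by its similarity to every other location. This is a generic position-attention computation for illustration only, not the paper's exact DAN module.

```python
import numpy as np

def position_attention(x):
    """Position-attention sketch: aggregate features from spatially
    similar locations and add a residual connection.

    x: feature map of shape (C, H, W)
    returns: re-weighted feature map of shape (C, H, W)
    """
    c, h, w = x.shape
    feats = x.reshape(c, h * w)            # (C, N) with N = H*W
    # Pairwise similarity between spatial positions: (N, N)
    energy = feats.T @ feats
    # Row-wise softmax turns similarities into attention weights
    energy -= energy.max(axis=1, keepdims=True)
    attn = np.exp(energy)
    attn /= attn.sum(axis=1, keepdims=True)
    # Weighted aggregation over similar positions, plus residual
    out = feats @ attn.T + feats
    return out.reshape(c, h, w)

x = np.random.rand(8, 4, 4)
y = position_attention(x)
print(y.shape)  # (8, 4, 4)
```

In practice such modules learn query/key/value projections; the sketch omits them to keep the core similarity-then-aggregate step visible.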


FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image Analysis

Maani, Fadillah, Saeed, Numan, Saleem, Tausifa, Farooq, Zaid, Alasmawi, Hussain, Diehl, Werner, Mohammad, Ameera, Waring, Gareth, Valappi, Saudabi, Bricker, Leanne, Yaqub, Mohammad

arXiv.org Artificial Intelligence

Foundation models are becoming increasingly effective in the medical domain, offering pre-trained models on large datasets that can be readily adapted for downstream tasks. Despite this progress, fetal ultrasound images remain a challenging domain for foundation models due to their inherent complexity, often requiring substantial additional training and facing limitations due to the scarcity of paired multimodal data. To overcome these challenges, here we introduce FetalCLIP, a vision-language foundation model capable of generating a universal representation of fetal ultrasound images. FetalCLIP was pre-trained using a multimodal learning approach on a diverse dataset of 210,035 fetal ultrasound images paired with text. This is the largest paired dataset of its kind used for foundation model development to date. This training approach allows FetalCLIP to effectively learn the intricate anatomical features present in fetal ultrasound images, resulting in robust representations that can be used for a variety of downstream applications. In extensive benchmarking across a range of key fetal ultrasound applications, including classification, gestational age estimation, congenital heart defect (CHD) detection, and fetal structure segmentation, FetalCLIP outperformed all baselines while demonstrating remarkable generalizability and strong performance even with limited labeled data. We plan to release the FetalCLIP model publicly for the benefit of the broader scientific community.
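CLIP-style multimodal pretraining of the kind described above rests on a symmetric contrastive objective: matched image-text pairs are pulled together while mismatched pairings are pushed apart. The NumPy sketch below shows that generic loss; it is an assumption-laden illustration, not FetalCLIP's actual training code.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss used in CLIP-style pretraining.
    Matched pairs sit on the diagonal of the similarity matrix."""
    # L2-normalize so the dot product is a cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarities

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))     # targets: the diagonal

    # Average the image->text and text->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 16))
txt = img + 0.01 * rng.normal(size=(4, 16))     # near-matched pairs
loss = clip_contrastive_loss(img, txt)
print(loss)
```

Minimizing this loss over large paired corpora is what yields the transferable representations the abstract refers to.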


Diffusion-based Iterative Counterfactual Explanations for Fetal Ultrasound Image Quality Assessment

Pegios, Paraskevas, Lin, Manxi, Weng, Nina, Svendsen, Morten Bo Søndergaard, Bashir, Zahra, Bigdeli, Siavash, Christensen, Anders Nymark, Tolsgaard, Martin, Feragen, Aasa

arXiv.org Artificial Intelligence

Obstetric ultrasound image quality is crucial for accurate diagnosis and monitoring of fetal health. However, producing high-quality standard planes is difficult, influenced by the sonographer's expertise and by factors such as maternal BMI or fetal dynamics. In this work, we propose using diffusion-based counterfactual explainable AI to generate realistic high-quality standard planes from low-quality non-standard ones. Through quantitative and qualitative evaluation, we demonstrate the effectiveness of our method in producing plausible counterfactuals of increased quality. This shows future promise both for enhancing the training of clinicians through visual feedback and for improving image quality and, consequently, downstream diagnosis and monitoring.


Removing confounding information from fetal ultrasound images

Mikolaj, Kamil, Lin, Manxi, Bashir, Zahra, Svendsen, Morten Bo Søndergaard, Tolsgaard, Martin, Christensen, Anders Nymark, Feragen, Aasa

arXiv.org Artificial Intelligence

Confounding information in the form of text or markings embedded in medical images can severely affect the training of diagnostic deep learning algorithms. However, data collected for clinical purposes often have such markings embedded in them. In dermatology, known examples include drawings or rulers that are overrepresented in images of malignant lesions. In this paper, we encounter text and calipers placed on the images found in national databases containing fetal screening ultrasound scans, which correlate with standard planes to be predicted. In order to utilize the vast amounts of data available in these databases, we develop and validate a series of methods for minimizing the confounding effects of embedded text and calipers on deep learning algorithms designed for ultrasound, using standard plane classification as a test case.
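The abstract does not specify the removal methods, but the simplest baseline for burned-in text and calipers is intensity-based masking, since such overlays are typically near-saturated. The sketch below is a crude illustration of that idea under those assumptions, not the paper's validated approach.

```python
import numpy as np

def mask_bright_overlays(img, thresh=0.95, fill=None):
    """Flag near-saturated pixels (where burned-in text/calipers
    typically live) and replace them with the mean of the rest.

    img: grayscale image in [0, 1], shape (H, W)
    returns: (cleaned image, boolean overlay mask)
    """
    mask = img >= thresh                # candidate overlay pixels
    out = img.copy()
    out[mask] = img[~mask].mean() if fill is None else fill
    return out, mask

# Simulated tissue (dim) with a bright burned-in caliper bar
img = np.clip(np.random.rand(32, 32) * 0.6, 0, 1)
img[4:6, 4:20] = 1.0
clean, mask = mask_bright_overlays(img)
print(mask.sum())  # 32 pixels flagged
```

Real confounders overlap tissue of similar intensity, which is why the paper develops and validates more careful methods; this baseline only makes the problem concrete.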


AI Detects Rare Birth Defects in Fetal Ultrasound

#artificialintelligence

Artificial intelligence (AI) deep learning is rapidly emerging as an innovative diagnostic tool for life sciences and health care. A new study demonstrates how AI deep learning can be used to diagnose a rare embryonic developmental disorder called cystic hygroma within the first trimester of pregnancy from fetal ultrasound images. "In this proof-of-concept study, we demonstrate the potential for deep-learning to support early and reliable identification of cystic hygroma from first trimester ultrasound scans," wrote Dr. Mark Walker, MD, FRCSC, MSc, MHCM, of the University of Ottawa (uOttawa) Faculty of Medicine, and his research team. Dr. Walker is a perinatologist, clinical epidemiologist, and high-risk obstetrician; a co-founder of the OMNI Research Group (Obstetrics, Maternal and Newborn Investigations) at The Ottawa Hospital, the largest maternal and newborn research group in Canada; and a professor and Vice-Dean of Internationalization and Global Health at the uOttawa Faculty of Medicine. He has published over 160 peer-reviewed articles.


Using AI to Diagnose Birth Defect in Fetal Ultrasound Images - Neuroscience News

#artificialintelligence

Summary: Using datasets of fetal ultrasounds, a new AI algorithm is able to detect cystic hygroma, a rare embryonic developmental disorder, within the first trimester of pregnancy. In a new proof-of-concept study led by Dr. Mark Walker at the uOttawa Faculty of Medicine, researchers are pioneering the use of a unique AI-based deep learning model as an assistive tool for the rapid and accurate reading of ultrasound images. It's trailblazing work because although deep learning models have become increasingly popular for interpreting medical images and detecting disorders, their application to obstetric ultrasonography is still in its nascent stages. Few AI-enabled studies have been published in this field. The goal of the team's study was to demonstrate the potential for deep-learning architecture to support early and reliable identification of cystic hygroma from first trimester ultrasound scans.


Deep Learning-based Quality Assessment of Clinical Protocol Adherence in Fetal Ultrasound Dating Scans

Cengiz, Sevim, Yaqub, Mohammad

arXiv.org Artificial Intelligence

To assess fetal health during pregnancy, doctors estimate gestational age (GA) from the Crown Rump Length (CRL) measurement in order to check fetal size and growth trajectory. However, GA estimation based on CRL requires proper positioning of the calipers on the fetal crown and rump view, which is not always an easy plane to find, especially for an inexperienced sonographer. A slightly oblique view of the true CRL plane can yield a different CRL value and therefore an incorrect GA estimate. This study presents an AI-based method for quality assessment of the CRL view by verifying 7 clinical scoring criteria used to confirm the correctness of the acquired plane. We show that our proposed solution achieves high accuracy on the majority of the scoring criteria when compared to an expert. We also show that if such a scoring system is used, it accurately identifies poorly acquired images and hence may help sonographers acquire better images, which could potentially lead to a better assessment of conditions such as Intrauterine Growth Restriction (IUGR).
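To see why an oblique, mismeasured CRL matters, the widely used Robinson formula converts CRL (in mm) to GA (in days) as GA = 8.052 × √CRL + 23.73. The sketch below only illustrates how a few millimetres of caliper error shift the estimate; the paper itself assesses plane quality, not GA computation.

```python
import math

def ga_from_crl_mm(crl_mm):
    """Gestational age in days from crown-rump length (mm),
    using the Robinson formula: GA = 8.052 * sqrt(CRL) + 23.73."""
    return 8.052 * math.sqrt(crl_mm) + 23.73

# A 3 mm caliper error around a 60 mm CRL shifts GA by ~1.5 days
ga_true = ga_from_crl_mm(60.0)
ga_oblique = ga_from_crl_mm(63.0)
print(round(ga_true, 1))               # ~86.1 days (~12 weeks 2 days)
print(round(ga_oblique - ga_true, 1))
```

Because the square root flattens with increasing CRL, the same measurement error matters more early in the first trimester, which is exactly when CRL dating is performed.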